📚 node [[hinge_loss|hinge loss]]
⥅ related node [[hinge_loss]]
⥅ related node [[squared_hinge_loss]]
⥅ node [[hinge_loss]] pulled by Agora
📓 garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Hinge_Loss.md by @KGBicheno
hinge loss
Go back to the [[AI Glossary]]
A family of loss functions for classification, designed to place the decision boundary as distant as possible from each training example, thus maximizing the margin between the examples and the boundary. Kernel support vector machines (KSVMs) use hinge loss (or a related function, such as squared hinge loss). For binary classification, the hinge loss function is defined as follows:
$$ \text{loss} = \max(0, 1 - (y \cdot y')) $$
where y is the true label, either -1 or +1, and y' is the raw output of the classifier model:
$$ y' = b + w_1 x_1 + w_2 x_2 + \ldots + w_n x_n $$
Consequently, a plot of hinge loss vs. (y * y') is flat at zero for (y * y') ≥ 1 and rises linearly as (y * y') falls below 1: examples classified correctly and outside the margin incur no loss, while misclassified examples are penalized in proportion to how wrong the score is.
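To make the two formulas concrete, here is a minimal NumPy sketch (not part of the original glossary entry; the function names `raw_output` and `hinge_loss`, and the weights shown, are invented for illustration):

```python
import numpy as np

def raw_output(x, w, b):
    """Raw (unthresholded) linear classifier output: y' = b + w . x."""
    return b + np.dot(w, x)

def hinge_loss(y, y_prime):
    """Binary hinge loss max(0, 1 - y * y'), with true label y in {-1, +1}."""
    return np.maximum(0.0, 1.0 - y * y_prime)

# Hypothetical weights and a single two-feature example.
w, b = np.array([0.5, -1.0]), 0.2
x = np.array([2.0, 0.5])

score = raw_output(x, w, b)        # y' = 0.2 + 0.5*2.0 - 1.0*0.5 = 0.7
print(hinge_loss(+1, score))       # correct label but inside the margin: 0.3
print(hinge_loss(-1, score))       # wrong label, penalized linearly: 1.7
print(hinge_loss(+1, score) ** 2)  # squared hinge loss of the same example: 0.09
```

Note that the loss reaches zero only when y * y' ≥ 1, which is what pushes the learned boundary to keep a margin of at least 1 (in score units) around every training example.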